We provide theoretical convergence guarantees for score-based generative models (SGMs), such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of large-scale real-world generative models such as DALL·E 2. Our main result is that, assuming accurate score estimates, such SGMs can efficiently sample from essentially any realistic data distribution. In contrast to prior works, our results (1) hold for an $L^2$-accurate score estimate (rather than $L^\infty$-accurate); (2) do not require restrictive functional inequality conditions that preclude substantial non-log-concavity; (3) scale polynomially in all relevant problem parameters; and (4) match state-of-the-art complexity guarantees for discretizations of the Langevin diffusion, provided that the score error is sufficiently small. We view this as strong theoretical justification for the empirical success of SGMs. We also examine SGMs based on the critically damped Langevin diffusion (CLD). Contrary to conventional wisdom, we provide evidence that the use of the CLD does not reduce the complexity of SGMs.
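To make the object of these guarantees concrete, below is a minimal sketch of an Euler-Maruyama discretization of the reverse-time Ornstein-Uhlenbeck SDE that DDPM-style samplers approximate. The `score` callable stands in for the learned ($L^2$-accurate) score estimate; all names, the step-size schedule, and the choice of forward process are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def reverse_sde_sample(score, T=1.0, n_steps=1000, dim=2, rng=None):
    """Euler-Maruyama discretization of the reverse-time OU SDE.
    Forward process: dX = -X dt + sqrt(2) dW, so the reverse drift is
    x + 2 * grad log p_t(x). `score(x, t)` is assumed to estimate
    grad log p_t(x); everything here is an illustrative sketch."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / n_steps
    x = rng.standard_normal(dim)          # start from the stationary N(0, I)
    for k in range(n_steps):
        t = T - k * h                     # integrate backwards in (forward) time
        drift = x + 2.0 * score(x, t)     # reverse drift of the OU process
        x = x + h * drift + np.sqrt(2.0 * h) * rng.standard_normal(dim)
    return x
```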
Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm over several devices, which own the training data. These techniques critically rely on reducing the communication cost (the main bottleneck) between the devices and a central server. Federated learning algorithms usually take an optimization approach: they are algorithms to minimize the training loss. In this work, we take a Bayesian approach to the training task and propose a communication-efficient variant of the Langevin algorithm to sample from the posterior. The latter approach is more robust than its optimization counterpart and provides more knowledge of the posterior distribution. We analyze our algorithm without assuming that the target distribution is strongly log-concave. Instead, we assume the weaker log-Sobolev inequality, which allows for nonconvexity.
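For orientation, here is a minimal sketch of one server round of a federated Langevin scheme: each client returns a stochastic gradient of its local potential, and the server averages them and injects Gaussian noise. The function names and the exact aggregation rule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def fed_langevin_round(theta, client_grads, step, temp=1.0, rng=None):
    """One illustrative federated Langevin round. `client_grads` is a
    list of callables, each returning a stochastic gradient of that
    client's local negative log-posterior term at `theta`."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.mean([grad(theta) for grad in client_grads], axis=0)  # server average
    noise = np.sqrt(2.0 * step * temp) * rng.standard_normal(theta.shape)
    return theta - step * g + noise    # Langevin step: drift plus injected noise
```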
Stein Variational Gradient Descent (SVGD) is an algorithm for sampling from a target density which is known up to a multiplicative constant. Although SVGD is a popular algorithm in practice, its theoretical study is limited to a few recent works. We study the convergence of SVGD in the population limit (i.e., with an infinite number of particles) to sample from a non-log-concave target distribution satisfying Talagrand's inequality T1. We first establish the convergence of the algorithm. Then, we establish a dimension-dependent complexity bound in terms of the Kernelized Stein Discrepancy (KSD). Unlike existing works, we do not assume that the KSD is bounded along the trajectory of the algorithm. Our approach relies on interpreting SVGD as a gradient descent over the space of probability measures.
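For reference, a minimal NumPy sketch of the standard finite-particle SVGD update with an RBF kernel (the scheme whose population limit is analyzed); the bandwidth and step size are illustrative choices.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step=0.1, bandwidth=1.0):
    """One SVGD update: each particle moves along a kernel-weighted
    average of the score at all particles (attraction toward high
    density) plus a repulsive term from the kernel gradient."""
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]       # (n, n, d)
    sq = np.sum(diff ** 2, axis=-1)                            # (n, n)
    K = np.exp(-sq / (2.0 * bandwidth ** 2))                   # RBF kernel matrix
    grads = np.stack([grad_log_p(x) for x in particles])       # (n, d) scores
    attract = K @ grads                                        # driving term
    repulse = np.sum(K[:, :, None] * diff / bandwidth ** 2, axis=1)  # repulsion
    return particles + step * (attract + repulse) / n
```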
We consider the minimization of the sum of three convex functions, where the first one, F, is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator L. This template problem has many applications, for instance in image processing and machine learning. First, we propose a new primal-dual algorithm for this problem, which we call PDDY. It is constructed by applying Davis-Yin splitting to a monotone inclusion in a primal-dual product space, where the operators are monotone under a specific metric. We show that three existing algorithms (the two forms of the Condat-Vu algorithm and the PD3O algorithm) have the same structure, so that PDDY is the fourth missing link in this self-consistent class of primal-dual algorithms. This representation eases the convergence analysis: it allows us to derive sublinear convergence rates in general, and linear convergence results in the presence of strong convexity. Moreover, within our broad and flexible analysis framework, we propose new stochastic generalizations of the algorithms, in which a variance-reduced random estimate of the gradient of F is used instead of the true gradient. Furthermore, we obtain, as a particular case of PDDY, a linearly converging algorithm for the minimization of a strongly convex function F under a linear constraint. We discuss its important application to decentralized optimization.
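For context, here is a sketch of one of the existing algorithms mentioned above, a Condat-Vu iteration for min_x F(x) + g(x) + h(Lx). This is not the PDDY update itself; the prox signatures are assumptions, and convergence requires suitable step sizes tau and sigma.

```python
import numpy as np

def condat_vu(grad_f, prox_g, prox_h_conj, L, x0, tau, sigma, n_iters=500):
    """Illustrative Condat-Vu primal-dual iteration for
    min_x F(x) + g(x) + h(Lx). `prox_h_conj(y, s)` is the prox of the
    convex conjugate h*, obtainable via Moreau's identity:
    prox_{s h*}(y) = y - s * prox_{h/s}(y / s)."""
    x = x0.copy()
    y = np.zeros(L.shape[0])
    for _ in range(n_iters):
        # primal step: forward step on F, backward (prox) step on g
        x_new = prox_g(x - tau * grad_f(x) - tau * (L.T @ y), tau)
        # dual step with extrapolated primal point
        y = prox_h_conj(y + sigma * (L @ (2 * x_new - x)), sigma)
        x = x_new
    return x
```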
Data-driven models such as neural networks are being applied more and more to safety-critical applications, such as the modeling and control of cyber-physical systems. Despite the flexibility of the approach, there are still concerns about the safety of these models in this context, as well as the need for large amounts of potentially expensive data. In particular, when long-term predictions are needed or frequent measurements are not available, the open-loop stability of the model becomes important. However, it is difficult to make such guarantees for complex black-box models such as neural networks, and prior work has shown that model stability is indeed an issue. In this work, we consider an aluminum extraction process where measurements of the internal state of the reactor are time-consuming and expensive. We model the process using neural networks and investigate the role of including skip connections in the network architecture as well as using l1 regularization to induce sparse connection weights. We demonstrate that these measures can greatly improve both the accuracy and the stability of the models for datasets of varying sizes.
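A minimal PyTorch sketch of the two measures discussed, a skip (residual) connection and an L1 weight penalty; the architecture and hyperparameters are illustrative, not the exact models used for the aluminum extraction process.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Residual-style block: output = x + MLP(x). The skip connection
    biases the learned dynamics toward the identity map, which is one
    way such connections can help open-loop stability (sketch only)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return x + self.net(x)

def l1_penalty(model, lam=1e-4):
    """L1 regularizer over all weights, encouraging sparse connections."""
    return lam * sum(p.abs().sum() for p in model.parameters())

# Training would then minimize: mse(model(x), y) + l1_penalty(model)
```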
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify those changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on a recent Isolation Distributional Kernel (IDK). iCID identifies a change interval if there is a high dissimilarity score between two non-homogeneous temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID have the ability to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
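A rough sketch of the interval-scoring idea follows, using an RBF-kernel MMD as a simple stand-in for the Isolation Distributional Kernel (the actual IDK is data-dependent with a finite feature map; this substitute, and the window/threshold parameters, are for illustration only).

```python
import numpy as np

def interval_dissimilarity(a, b, bandwidth=1.0):
    """Dissimilarity between two windows of shape (w, d) via kernel mean
    embeddings: squared MMD with an RBF kernel (IDK stand-in)."""
    def mean_k(x, y):
        sq = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2)).mean()
    return mean_k(a, a) + mean_k(b, b) - 2.0 * mean_k(a, b)

def detect_change_intervals(stream, width, threshold):
    """Slide two adjacent intervals over a stream of shape (T, d); flag
    positions where their dissimilarity exceeds the threshold."""
    flags = []
    for t in range(width, len(stream) - width):
        score = interval_dissimilarity(stream[t - width:t], stream[t:t + width])
        if score > threshold:
            flags.append(t)
    return flags
```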
Training a Machine Learning model requires sufficient data. The sufficiency of the data is not only a matter of quantity, but also of relevancy and reduced redundancy. Data-generating processes create massive amounts of data. When used raw, such big data causes heavy computational resource utilization. Instead of the raw data, a proper Condensed Representation can be used. By combining K-means, a well-known clustering method, with correction and refinement facilities, a novel Condensed Representation method for Machine Learning applications is introduced. To present the novel method meaningfully and visually, synthetically generated data is employed. It is shown that, by using the condensed representation instead of the raw data, acceptably accurate model training is possible.
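A minimal sketch of the condensation idea, replacing each class's raw points with K-means centroids; the correction and refinement facilities are omitted, and `k_per_class` is an illustrative parameter, not the paper's setting.

```python
import numpy as np
from sklearn.cluster import KMeans

def condensed_representation(X, y, k_per_class=50):
    """Replace each class's raw points with K-means centroids, keeping
    the class label, to obtain a much smaller training set (sketch)."""
    Xc, yc = [], []
    for label in np.unique(y):
        pts = X[y == label]
        k = min(k_per_class, len(pts))
        km = KMeans(n_clusters=k, n_init=10).fit(pts)
        Xc.append(km.cluster_centers_)          # condensed points
        yc.append(np.full(k, label))            # inherited labels
    return np.vstack(Xc), np.concatenate(yc)
```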
A digital twin is defined as a virtual representation of a physical asset, enabled through data and simulators, for real-time prediction, optimization, monitoring, control, and improved decision-making. Unfortunately, the term remains vague and says little about its capability. Recently, the concept of capability levels has been introduced to address this issue. The concept states that, based on its capability, a digital twin can be categorized on a scale from zero to five, referred to as standalone, descriptive, diagnostic, predictive, prescriptive, and autonomous, respectively. The current work introduces the concept in the context of the built environment and demonstrates it using a modern house as a use case. The house is equipped with an array of sensors that collect time-series data regarding the internal state of the house. Together with physics-based and data-driven models, these data are used to develop digital twins at different capability levels, demonstrated in virtual reality. The work, in addition to presenting a blueprint for developing digital twins, also provides future research directions to enhance the technology.
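The capability scale itself is easy to pin down in code; a small enum mirroring the six levels named above:

```python
from enum import IntEnum

class TwinCapability(IntEnum):
    """Digital-twin capability levels on the zero-to-five scale
    described in the text."""
    STANDALONE = 0
    DESCRIPTIVE = 1
    DIAGNOSTIC = 2
    PREDICTIVE = 3
    PRESCRIPTIVE = 4
    AUTONOMOUS = 5
```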
Speech translation (ST) is the task of directly translating acoustic speech signals in a source language into text in a foreign language. The ST task has long been addressed using a pipeline approach with two modules: first, Automatic Speech Recognition (ASR) in the source language, followed by text-to-text Machine Translation (MT). In the past few years, we have seen a paradigm shift towards end-to-end approaches using sequence-to-sequence deep neural network models. This paper presents our efforts towards the development of the first Broadcast News end-to-end Arabic-to-English speech translation system. Starting from independent ASR and MT LDC releases, we were able to identify about 92 hours of Arabic audio recordings for which the manual transcription was also translated into English at the segment level. These data were used to train and compare pipeline and end-to-end speech translation systems under multiple scenarios, including transfer learning and data augmentation techniques.
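The contrast between the two paradigms can be stated in a few lines; `asr`, `mt`, and `st_model` below are placeholder callables, not components of the described system.

```python
def pipeline_st(audio, asr, mt):
    """Cascaded ST: source-language ASR, then text-to-text MT."""
    return mt(asr(audio))

def end_to_end_st(audio, st_model):
    """Direct ST: one sequence-to-sequence model maps speech to text,
    avoiding error propagation from intermediate transcripts."""
    return st_model(audio)
```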
Ordinary Differential Equation (ODE)-based models have become popular foundation models for solving many time-series problems. Combining neural ODEs with traditional RNN models has provided the best representations for irregular time series. However, ODE-based models require the trajectory of hidden states to be defined based on the initial observed value or the last available observation. This raises questions about how long the generated hidden state remains informative and whether it is effective when long sequences are used instead of the typically used shorter ones. In this article, we introduce CrossPyramid, a novel ODE-based model that aims to enhance the generalizability of sequence representations. CrossPyramid does not rely only on the hidden state from the last observed value; it also considers ODE latent representations learned from other samples. The main idea of our proposed model is to define the hidden state for the unobserved values based on the non-linear correlation between samples. Accordingly, CrossPyramid is built with three distinctive parts: (1) an ODE auto-encoder to learn the best data representation; (2) a pyramidal attention method to categorize the learned representations (hidden states) based on the relationship characteristics between samples; and (3) a cross-level ODE-RNN to integrate the previously learned information and provide the final latent state for each sample. Through extensive experiments on partially-observed synthetic and real-world datasets, we show that the proposed architecture can effectively model long gaps in intermittent series and outperforms state-of-the-art approaches. The results show an average improvement of 10% on univariate and multivariate datasets for both forecasting and classification tasks.
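For orientation, a minimal sketch of the standard ODE-RNN building block that such models extend: between observations the hidden state evolves under a learned ODE (here solved with a few Euler steps), and each arriving observation triggers a GRU update. This is the generic cell, not CrossPyramid's cross-level variant, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class ODERNNCell(nn.Module):
    """Generic ODE-RNN cell: a learned ODE carries the hidden state
    across the gap of length `dt` (a float), then a GRU cell folds in
    the new observation `x` of shape (batch, input_dim)."""
    def __init__(self, input_dim, hidden_dim, n_euler=5):
        super().__init__()
        self.ode_func = nn.Sequential(nn.Linear(hidden_dim, hidden_dim),
                                      nn.Tanh(),
                                      nn.Linear(hidden_dim, hidden_dim))
        self.gru = nn.GRUCell(input_dim, hidden_dim)
        self.n_euler = n_euler

    def forward(self, h, x, dt):
        step = dt / self.n_euler
        for _ in range(self.n_euler):      # evolve h over the observation gap
            h = h + step * self.ode_func(h)
        return self.gru(x, h)              # update at the new observation
```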